telemetry: public /api/stats endpoint (Tier B showcase) #1
Adds a stdlib-only Vercel Python serverless function at `api/stats.py` that exposes honest, GitHub-derived metrics about the codebase. The endpoint is consumed by the Production Telemetry panel on https://eleventh.dev.

Per the schema at https://github.com/IgnazioDS/IgnazioDS/blob/main/TELEMETRY_SCHEMA.md this system runs in showcase mode (Tier B): the Vercel deploy is a public landing page, not a system serving production workload. Rather than fabricate `eval_runs_total` / `last_pass_rate` / `regressions_caught_30d` counters that would have nothing to count, the endpoint reports real signals about the codebase:

- `commits_30d`, `commits_total` via GitHub Link header pagination
- `primary_language`, `repo_stars` via `GET /repos/:owner/:repo`
- `last_commit_at` via `GET /repos/.../commits?per_page=1`
- `lines_of_code` via committed `api/_telemetry_static.json`
- `mode = "showcase"` — explicit Tier-B signal for the widget

Implementation:

- `BaseHTTPRequestHandler` entrypoint, no third-party deps
- 5-min module-scope cache stays under GitHub's 60-req/hr unauthenticated limit
- `SAFETY_CAPS` clamps every counter
- Never returns HTTP 5xx: GitHub failures degrade to `status="degraded"` with the last-good cache or zeroed metrics
- CORS headers set both at the `vercel.json` edge and inside the handler
- Tests: 6/6 unittest pass (happy, degraded, stale-cache, caps, handler 200, OPTIONS 204)

To refresh `lines_of_code` before deploying:

```
python3 scripts/compute_telemetry_static.py
git add api/_telemetry_static.json
```
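The caching, clamping, and degradation behavior described above can be sketched roughly as follows. This is a minimal illustration, not the code in `api/stats.py`: the names `SAFETY_CAPS`, `CACHE_TTL_SECONDS`, `_cache`, `get_stats`, and the cap values are assumptions for the sketch, and the real handler wraps this in a `BaseHTTPRequestHandler`.

```python
import time

# Hypothetical caps; the real SAFETY_CAPS values in api/stats.py may differ.
SAFETY_CAPS = {"commits_30d": 10_000, "commits_total": 1_000_000, "repo_stars": 500_000}
CACHE_TTL_SECONDS = 300  # 5-minute module-scope cache
_cache = {"at": 0.0, "payload": None}  # survives across warm invocations

def _clamp(metrics):
    """Clamp every counter to its safety cap, flooring at 0."""
    return {k: max(0, min(v, SAFETY_CAPS.get(k, v))) for k, v in metrics.items()}

def get_stats(fetch_github, now=time.time):
    """Return telemetry, serving the cache while fresh and degrading
    (never raising, so the endpoint never returns 5xx) on fetch failure."""
    if _cache["payload"] is not None and now() - _cache["at"] < CACHE_TTL_SECONDS:
        return _cache["payload"]
    try:
        payload = {"status": "operational", "metrics": _clamp(fetch_github())}
        _cache.update(at=now(), payload=payload)
        return payload
    except Exception:
        if _cache["payload"] is not None:
            # Stale last-good cache, flagged as degraded
            return {**_cache["payload"], "status": "degraded"}
        return {"status": "degraded", "metrics": {k: 0 for k in SAFETY_CAPS}}
```

Injecting `fetch_github` as a callable is what makes the degraded and stale-cache paths unit-testable without touching the network.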
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 359e3433cf
```python
last_commit_at = (
    latest_commits[0].get("commit", {}).get("author", {}).get("date")
)
```
Use committer date for last_commit_at
last_commit_at is currently taken from commit.author.date, which can be arbitrarily old on cherry-picks or rebased history, so the endpoint can report a stale “last commit” time even when new commits were just applied. This impacts telemetry accuracy for the primary status widget; using commit.committer.date (or repo pushed_at) avoids false regressions in freshness.
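The suggested fix can be sketched as below. Here `latest_commits` is the parsed JSON of `GET /repos/.../commits?per_page=1`, whose items carry both `commit.author.date` and `commit.committer.date` in the GitHub REST schema; the helper name `extract_last_commit_at` is hypothetical, and the actual patch may differ.

```python
def extract_last_commit_at(latest_commits):
    """Prefer the committer date (when the commit landed on the branch)
    over the author date, which cherry-picks and rebases carry over unchanged."""
    if not latest_commits:
        return None
    commit = latest_commits[0].get("commit", {})
    return (
        commit.get("committer", {}).get("date")
        or commit.get("author", {}).get("date")  # fallback if committer is absent
    )
```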
Summary
- `api/stats.py` exposing the public Tier-B telemetry contract (`commits_30d`, `commits_total`, `last_commit_at`, `primary_language`, `repo_stars`, `lines_of_code`) with `mode: "showcase"` flagged so the homepage widget renders this tile as a deployed scaffold rather than a live workload system

Why showcase mode
Per the schema at https://github.com/IgnazioDS/IgnazioDS/blob/main/TELEMETRY_SCHEMA.md, this repo is currently a static landing page with a CLI scaffold. Reporting fabricated `eval_runs_total` / `last_pass_rate` / `regressions_caught_30d` counters would violate the credibility constraint of the homepage telemetry panel. The endpoint instead reports real signals about the codebase. When promoted to live workload, the implementation upgrades to Tier A and `mode` flips to `live`.

Verification (local with real GitHub call)
```json
{
  "system": "evalops",
  "mode": "showcase",
  "status": "operational",
  "last_deployed_at": "2026-04-27T18:41:57Z",
  "last_commit_at": "2026-04-01T16:54:50Z",
  "metrics": {
    "commits_30d": 1,
    "commits_total": 3,
    "primary_language": "Python",
    "repo_stars": 0,
    "lines_of_code": 1177
  },
  "schema_version": 1,
  "generated_at": "2026-04-27T18:42:18Z"
}
```

Test plan
- `python3 -m unittest tests.test_stats` — 6/6 pass
- `curl <preview>/api/stats` returns HTTP 200
- https://eleventh.dev
- `curl https://evalops-workbench.vercel.app/api/stats`